hysop.operator.hdf_io module

I/O operators

  • HDF_Writer : operator to write fields into an hdf file

  • HDF_Reader : operator to read fields from an hdf file

  • HDF_IO : abstract interface for hdf io classes

class hysop.operator.hdf_io.HDF_IO(var_names=None, name_prefix='', name_postfix='', force_backend=None, **kwds)[source]

Bases: HostOperatorBase

Abstract interface to read/write hysop fields from/to hdf files.

Read/write some fields data from/into hdf/xdmf files. Parallel I/O.

Parameters:
  • var_names (a dictionary, optional) – keys = Field, values = string (field name). See notes below.

  • name_prefix (str, optional) – Optional name prefix for variables.

  • name_postfix (str, optional) – Optional name postfix for variables.

  • force_backend (hysop.constants.Backend) – Force the source backend for fields.

  • kwds (dict) – Base class arguments.

Notes

Datasets in hdf files are identified by name. In hysop, when writing a file, the default dataset name is 'continuous_field.name + topo.id + component direction', but this can be changed through the var_names argument. For example, if input_fields=[velo, vorti] and the hdf file contains the keys 'vel_1_X, vel_1_Y, vel_1_Z, dat_2_X, dat_2_Y, dat_2_Z', then use var_names = {velo: 'vel', vorti: 'dat'} to read vel/dat into velo/vorti.
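For example, a reader mapping those dataset prefixes onto two hysop fields could be set up as in the following sketch (domain, fields and discretization follow the usual hysop construction patterns, but the exact signatures used here are assumptions, not verified against this module):

    from hysop import Box, Field
    from hysop.operator.hdf_io import HDF_Reader

    # Illustrative setup; names and grid resolution are assumptions.
    box   = Box(length=(1.0,)*3)
    velo  = Field(domain=box, name='velo',  is_vector=True)
    vorti = Field(domain=box, name='vorti', is_vector=True)
    npts  = (64,)*3

    # Map the 'vel_*' and 'dat_*' datasets of the file onto velo/vorti.
    reader = HDF_Reader(variables={velo: npts, vorti: npts},
                        var_names={velo: 'vel', vorti: 'dat'})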

create_topology_descriptors()[source]
Called in get_field_requirements, just after handle_method.
Topology requirements (or descriptors) are:
  1. min and max ghosts for each input and output variable

  2. allowed splitting directions for cartesian topologies

discretize()[source]

By default, an operator discretizes all its variables. For each input continuous field that is also an output field, the input topology may differ from the output topology.

After this call, one can access self.input_discrete_fields and self.output_discrete_fields, which contain the input and output discretized fields mapped by their continuous fields.

self.discrete_fields will be a tuple containing all input and output discrete fields.

Discrete tensor fields are built back from the discretized scalar fields and are accessible from self.input_tensor_fields, self.output_tensor_fields and self.discrete_tensor_fields, like their scalar counterparts.
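As a sketch of the resulting attributes (op stands for any operator that has already reached this stage of its initialization, velo for one of its continuous input fields; both are assumed defined elsewhere):

    # Only illustrates the attributes documented above.
    op.discretize()

    dvelo = op.input_discrete_fields[velo]   # discrete field, keyed by its
                                             # continuous counterpart
    all_dfields = op.discrete_fields         # tuple of all input and output
                                             # discrete fields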

get_field_requirements()[source]

Called just after handle_method(), i.e. once self.method has been set. Field requirements are:

  1. required local and global transposition state, if any.

  2. required memory ordering (either C or Fortran)

Default is Backend.HOST, no min or max ghosts, MemoryOrdering.ANY, and no specific default transposition state for each input and output variable.

get_node_requirements()[source]

Called after get_field_requirements to get global operator requirements.

By default we enforce a unique:

  • transposition state

  • cartesian topology shape

  • memory order (either C or Fortran)

across all fields.

open_hdf(count, mode, compression='gzip')[source]
classmethod supported_backends()[source]

Return the backends that this operator’s topologies can support.

classmethod supports_mpi()[source]

Return True if this operator was implemented to support multiple MPI processes.

classmethod supports_multiple_topologies()[source]

Should return True if this node supports multiple topologies.

class hysop.operator.hdf_io.HDF_Reader(variables, restart=None, name=None, **kwds)[source]

Bases: HDF_IO

Parallel reading of hdf/xdmf files to fill some fields in.

Read some fields data from hdf/xdmf files. Parallel reading.

Parameters:
  • restart (int, optional) – number of a specific iteration to be read. Default=None, i.e. read the first iteration.

  • kwds (dict) – Base class arguments.

Notes

restart corresponds to the number that appears in the name of the hdf file, i.e. the number of the iteration at which the writing occurred. See examples in tests_hdf_io.py.
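A minimal sketch, assuming a previous run dumped the fields at iteration 5 and that velo and npts are defined as in the var_names example above:

    from hysop.operator.hdf_io import HDF_Reader

    # Read back the fields dumped at iteration 5; '5' is the number that
    # appears in the name of the hdf file written by HDF_Writer.
    reader = HDF_Reader(variables={velo: npts}, restart=5)

    # Once the file is open, reader.dataset_names() lists the dataset
    # names available in it, e.g. to help build a var_names mapping.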

apply(**kwds)

Abstract method that should be implemented. Applies this node (operator, computational graph operator…).

dataset_names()[source]

Return the list of available names for datasets in the required file.

finalize()[source]

Cleanup this node (free memory from external solvers, …). By default, this does nothing.

class hysop.operator.hdf_io.HDF_Writer(variables, name=None, pretty_name=None, **kwds)[source]

Bases: HDF_IO

Print field(s) values on a given topo, in HDF5 format.

Write some fields data into hdf/xdmf files. Parallel writing.

Parameters:
  • kwds (dict) – Base class arguments.
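A minimal usage sketch (velo, vorti and npts assumed defined as in the HDF_Reader example; name sets the output file name prefix, per the signature above):

    from hysop.operator.hdf_io import HDF_Writer

    # Dump velocity and vorticity to hdf5/xdmf files prefixed 'fields'.
    writer = HDF_Writer(variables={velo: npts, vorti: npts},
                        name='fields')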

apply(**kwds)

Abstract method that should be implemented. Applies this node (operator, computational graph operator…).

createXMFFile()[source]

Create and fill the header of the xdmf file.

finalize()[source]

Cleanup this node (free memory from external solvers, …). By default, this does nothing.

get_work_properties(**kwds)[source]

Returns the extra memory requirements of this operator. This allows operators to request temporary buffers that will be shared between operators in a graph to reduce the memory footprint and the number of allocations.

Returned memory is only usable during the operator call (i.e. in self.apply). Temporary buffers may be shared between different operators as determined by the graph builder.

By default, if there are no input nor output temporary fields, this returns no requests, meaning that this node requires no extra buffers.

If temporary fields are present, their memory requests are automatically computed and returned.
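The intended handshake with setup() can be sketched as follows; in a complete program the graph builder performs the allocation, and allocate_work below is a hypothetical stand-in for that step:

    # 'op' is assumed to be a fully discretized operator instance.
    requests = op.get_work_properties()

    # Hypothetical helper: allocate the requested temporary buffers.
    work = allocate_work(requests)

    op.setup(work)   # sets op.ready; op.apply() may now be called
    op.apply()
    op.finalize()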

openXMFFile()[source]

Open an existing xdmf file.

setup(work, **kwds)[source]

Setup the temporary buffers that have been requested in get_work_properties(). This function may be used to execute post-allocation routines. This sets the self.ready flag to True. Once this flag is set, one may call ComputationalGraphNode.apply() and ComputationalGraphNode.finalize().

Automatically honour temporary field memory requests.

updateXMFFile()[source]

Update xdmf file.